
Disable auth-failing account groups instead of removing them #78

Closed
ndycode wants to merge 308 commits into main from fix/disable-auth-failure-accounts

Conversation

Owner

@ndycode ndycode commented Mar 13, 2026

What:

  • disable same-refresh-token account groups after repeated auth refresh failures instead of removing them from the pool
  • persist disabledReason so explicit login and flagged-account restore revive only auth-failure disables while preserving manual disables
  • block manual re-enable of auth-failure disables until fresh login, while clearing stale cooldown metadata on real manual re-enable
  • serialize account saves, flush pending debounced saves during shutdown, and harden overlapping-save / transient file-lock recovery
  • normalize storage, migration, and schema handling for enabled and disabledReason while preserving the canonical "omit unset optional fields" storage shape
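The disable/revive rules above can be sketched as a small state machine. This is a simplified model for illustration, not the PR's actual code — the names (`Account`, `trySetAccountEnabled`) mirror the PR, but the shapes are reduced:

```typescript
// Simplified sketch of the disable/revive semantics described above.
type DisabledReason = "user" | "auth-failure";

interface Account {
  refreshToken: string;
  enabled?: boolean;            // omitted when enabled (canonical storage shape)
  disabledReason?: DisabledReason;
}

// Manual re-enable is blocked while the disable came from auth failures;
// only a fresh login (revive) may clear an "auth-failure" disable.
function trySetAccountEnabled(account: Account, enabled: boolean): { ok: boolean } {
  if (enabled && account.disabledReason === "auth-failure") {
    return { ok: false }; // requires fresh login first
  }
  if (enabled) {
    delete account.enabled;       // omit unset optional fields
    delete account.disabledReason;
  } else {
    account.enabled = false;
    account.disabledReason = "user";
  }
  return { ok: true };
}

// Explicit login/restore revives only auth-failure disables,
// preserving manual "user" disables.
function reviveAuthFailureDisables(accounts: Account[]): number {
  let revived = 0;
  for (const a of accounts) {
    if (a.disabledReason === "auth-failure") {
      delete a.enabled;
      delete a.disabledReason;
      revived++;
    }
  }
  return revived;
}
```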

Why:

  • keep bad account groups visible and recoverable instead of silently shrinking the pool
  • avoid login or menu actions undoing manual disables
  • reduce disabled-state loss during overlapping saves, transient Windows file locks, cache invalidation, and shutdown
  • keep re-auth guidance observable without logging raw user identifiers

Validation:

  • npm run typecheck
  • npm run lint
  • npm test
  • npm run build

Summary by CodeRabbit

  • New Features

    • Persisted account disable reasons with ability to revive matching disabled accounts; UI and selection now show enabled-account counts.
  • Bug Fixes

    • Improved shutdown cleanup to flush pending saves reliably and surface errors.
    • Persistence, cache reloads, account-switching, and rotation now consistently respect enabled/disabled state and revive tokens on restore/refresh.
  • Tests

    • Expanded test coverage for disable/revive flows, shutdown cleanup, persistence, selection, and edge cases.

note: greptile review for oc-chatgpt-multi-auth. cite files like lib/foo.ts:123. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary

this pr replaces silent account removal after repeated auth failures with a persistent disable/revive flow, hardens save serialization against windows file-lock races, and adds a shutdown deadline to flush debounced writes before exit.

key changes:

  • disableAccountsWithSameRefreshToken marks accounts enabled: false, disabledReason: "auth-failure" in-memory and persists via the new serialized enqueueSave queue instead of silently dropping them from the pool
  • flushPendingSave gains a retry loop (max 20 iterations) that replays saveRetryNeeded-flagged writes after transient EBUSY failures, with an optional deadlineMs for shutdown callers
  • lib/shutdown.ts switches from process.once to process.on with explicit deregistration, adds a 10 s windows-aware deadline before process.exit, and passes deadlineMs to cleanup fns so the account manager flush can race the deadline cleanly
  • revival logic in persistAccountPool correctly guards on account.disabledReason !== "auth-failure" before clearing the disable, preserving manual user-disables across logins
  • mergedDisabledReason in the account-merge path gives "user" priority over "auth-failure", preventing a stale disk state from making a user-disabled account auto-revivable
  • storage normalization (normalizeStoredEnabled, normalizeStoredAccountDisabledReason) and the V1→V3 migration default legacy disabled accounts to "user" so they are never mistakenly auto-revived
  • two minor issues: an unreachable else branch in trySetAccountEnabled (the early-return guard above it already covers the case) and a redundant final check in the flushPendingSave loop (both branches that could make it false are already handled with continue above)
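the serialized-save pattern behind enqueueSave can be reduced to a promise chain where every save awaits the previous one, so overlapping writers cannot interleave. this is a minimal standalone sketch, not the PR's implementation:

```typescript
// Minimal sketch of a serialized save queue: each enqueued save
// chains onto the previous one, so writes never overlap.
class SaveQueue {
  private tail: Promise<void> = Promise.resolve();
  order: string[] = []; // records start/end interleaving for inspection

  enqueueSave(label: string, write: () => Promise<void>): Promise<void> {
    const next = this.tail
      .catch(() => {}) // a failed save must not wedge the queue
      .then(async () => {
        this.order.push(`start:${label}`);
        await write();
        this.order.push(`end:${label}`);
      });
    this.tail = next;
    return next;
  }
}
```

even though the second save's write is much faster, it cannot start until the first one finishes, which is the property the PR relies on to avoid overlapping-save state loss.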

Confidence Score: 4/5

  • safe to merge — no functional regressions found; only minor dead code and a redundant guard
  • the pr is well-tested with explicit windows EBUSY simulation, concurrent-write serialization tests, revival/manual-disable round-trips, and shutdown deadline coverage. the two issues found (unreachable else branch, redundant guard) are code quality concerns that do not affect runtime behavior. the windows file-lock concurrency story is explicit and covered by tests.
  • lib/accounts.ts — unreachable else branch in trySetAccountEnabled (line 983) and redundant final guard in flushPendingSave (line 1232)

Important Files Changed

Filename Overview
lib/accounts.ts core of the PR — adds disableAccountsWithSameRefreshToken, serialized enqueueSave queue, flushPendingSave retry loop, trySetAccountEnabled with disabledReason guard, and getEnabledAccountCount. contains one unreachable else branch and a redundant final guard in flushPendingSave; windows file-lock retry logic and save serialization are solid.
index.ts wires up shutdown tracking, switches all cache updates to setCachedAccountManager, adds revival logic guarded on disabledReason === "auth-failure", and improves zero-enabled-account error messages; module-level accountManagerCleanupHook ??= pattern is sound but re-registers on every plugin instantiation.
lib/shutdown.ts replaces process.once with process.on + explicit deregistration, adds signalExitPending guard to prevent double-exit, passes deadlineMs context to cleanup functions, and adds 10 s windows-aware timeout before force-exit; logic is correct.
lib/storage.ts extends normalizeAccountStorage and normalizeFlaggedStorage to call normalizeStoredEnabled / normalizeStoredAccountDisabledReason; preserves the "omit unset optional fields" storage shape cleanly.
lib/storage/migrations.ts adds AccountDisabledReason type and normalization helpers; V1→V3 migration correctly defaults missing reasons to "user" while V3 normalization leaves them undefined (preventing auto-revival of legacy disables).
test/accounts.test.ts comprehensive new tests for disable/revive flows, save serialization under simulated EBUSY errors, trySetAccountEnabled edge cases, and flushPendingSave concurrency; covers the windows file-lock retry path explicitly.

Sequence Diagram

sequenceDiagram
    participant RT as Request Handler
    participant AM as AccountManager
    participant Disk as accounts.json (Windows FS)
    participant SH as shutdown.ts

    RT->>AM: selectAccount() [enabled only]
    AM-->>RT: account (enabled !== false)

    RT->>AM: incrementAuthFailures(account)
    AM-->>RT: failures >= MAX_AUTH_FAILURES_BEFORE_DISABLE

    RT->>AM: disableAccountsWithSameRefreshToken(account)
    Note over AM: sets enabled=false, disabledReason="auth-failure"<br/>clears cooldown, resets failure counter
    AM-->>RT: disabledCount

    RT->>AM: saveToDiskDebounced()
    Note over AM: saveRetryNeeded=true, debounce armed

    Note over Disk: Windows AV lock may block write
    AM->>Disk: enqueueSave → persistToDisk()
    alt EBUSY / file lock
        Disk-->>AM: Error (EBUSY)
        Note over AM: saveRetryNeeded stays true
    else success
        Disk-->>AM: ok
        Note over AM: saveRetryNeeded=false
    end

    Note over SH: SIGINT / SIGTERM received
    SH->>AM: flushPendingSave({ deadlineMs: now+10s })
    loop until settled or deadline
        AM->>Disk: retry persistToDisk()
        Disk-->>AM: ok
    end
    SH->>SH: process.exit(0)

    Note over RT: User runs `opencode auth login`
    RT->>AM: persistAccountPool(results, false,<br/>{ reviveMatchingDisabledAccounts: true })
    Note over AM: merge → disabledReason="auth-failure" preserved<br/>revival check clears enabled/disabledReason
    AM->>Disk: save (account revived)

Comments Outside Diff (8)

  1. lib/shutdown.ts, line 28-31 (link)

    signalExitPending never resets — silently swallows all future signals

    once signalExitPending = true is set by the first signal, it is never reset. if process.exit is intercepted (mocked in tests, or process somehow survives past requestExit), every subsequent SIGINT/SIGTERM hits the early-return guard and is silently dropped forever. removeShutdownHandlers is also skipped in runCleanup's .finally() when signalExitPending is true, so the registered process.on listeners accumulate and are never removed for that module instance.

    current tests use vi.resetModules() inside signal-handler tests to get fresh state, so this doesn't bite the test suite today. but any future test that calls registerCleanup → triggers a signal path → then calls runCleanup() programmatically without a module reset will find signals permanently ignored.

    consider also resetting the flag in the cleanup .finally() block when !signalExitPending is false but cleanup completed successfully without an actual process.exit call (i.e., mocked-exit path).
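    a hedged sketch of that reset — names mirror lib/shutdown.ts but this is a simplified model, not the plugin's code:

    ```typescript
    // Simplified model: re-arm the latch when exit() is intercepted.
    let signalExitPending = false;

    async function requestExit(
      cleanup: () => Promise<void>,
      exit: () => void, // stand-in for process.exit; may be mocked in tests
    ): Promise<void> {
      if (signalExitPending) return; // swallow re-entrant signals during cleanup
      signalExitPending = true;
      try {
        await cleanup();
      } finally {
        exit();
        // If exit() was intercepted (e.g. mocked) and we are still running,
        // re-arm the latch so later signals are not silently dropped forever.
        signalExitPending = false;
      }
    }
    ```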

  2. lib/accounts.ts, line 784-792 (link)

    resolvePendingSaveSettledWaiters skips resolution while debounce timer is still armed

    hasPendingSave() returns true when saveDebounceTimer !== null. so if a waiter calls waitForPendingSaveToSettle() while only a debounce timer is pending (no in-flight pendingSave yet), and the timer fires, kicks off enqueueSave, the save completes, and then saveToDiskDebounced is immediately called again — the waiter registered before that second arm will stay in pendingSaveSettledWaiters until another full save completes.

    in scheduleTrackedManagerPrune, the comment says "re-registering this waiter after each settle is intentional," which handles the multi-arm case for the shutdown path. but any waiter that lands while the debounce is live must survive through at least one complete enqueueSave.finally() cycle; if the debounce is then cancelled by flushPendingSave before it fires, and flushPendingSave calls enqueueSave directly, resolvePendingSaveSettledWaiters will run and correctly drain them. this is fine today, but worth a unit test covering: register waiter → debounce → flushPendingSave cancels debounce → waiter resolves without a second debounce cycle.
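    the requested case can be modeled without the real manager. this is a reduced stand-in (names mirror the PR; the logic is simplified) showing that a waiter registered while only the debounce timer is armed still resolves when flushPendingSave cancels that timer and saves directly:

    ```typescript
    // Reduced model of the debounce-cancelled-by-flush path.
    class DebouncedSaver {
      private timer: ReturnType<typeof setTimeout> | null = null;
      private waiters: Array<() => void> = [];
      saves = 0;

      saveToDiskDebounced(ms: number): void {
        if (this.timer) clearTimeout(this.timer);
        this.timer = setTimeout(() => {
          this.timer = null;
          this.doSave();
        }, ms);
      }

      waitForPendingSaveToSettle(): Promise<void> {
        return new Promise((resolve) => this.waiters.push(resolve));
      }

      async flushPendingSave(): Promise<void> {
        if (this.timer) {
          clearTimeout(this.timer); // cancel the armed debounce
          this.timer = null;
        }
        this.doSave(); // save directly instead of waiting for the timer
      }

      private doSave(): void {
        this.saves++;
        for (const resolve of this.waiters.splice(0)) resolve(); // drain waiters
      }
    }
    ```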

  3. lib/shutdown.ts, line 3 (link)

    5-second signal timeout may be too tight on Windows

    SIGNAL_CLEANUP_TIMEOUT_MS = 5_000 is the total budget for all cleanup work, including flushPendingSave() on every tracked and active manager. on Windows with antivirus scanners, a single write can be blocked for 2-4 seconds. if there are two or more managers with pending saves (e.g., after a plugin reload), the timeout can fire before all saves flush — and process.exit(0) runs while disabled state is still unwritten to disk.

    the save-queue serialization in flushPendingSave means managers flush one after another, not in parallel, so the risk compounds with manager count. consider raising this, or making it configurable:

    also, no test currently exercises the path where the 5s timeout races against a slow flushPendingSave. adding a test that intercepts process.exit and verifies the warn-and-continue behavior on timeout would close this windows concurrency gap.
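    the "make it configurable" idea could look like this — the env var name OPENCODE_CLEANUP_TIMEOUT_MS is hypothetical, not an existing option:

    ```typescript
    // Hypothetical env override for the signal-cleanup budget.
    const DEFAULT_SIGNAL_CLEANUP_TIMEOUT_MS = 5_000;

    function resolveSignalCleanupTimeoutMs(
      env: Record<string, string | undefined>,
    ): number {
      const raw = env["OPENCODE_CLEANUP_TIMEOUT_MS"];
      const parsed = raw === undefined ? NaN : Number(raw);
      // Fall back to the default for missing, non-numeric, or non-positive values.
      return Number.isFinite(parsed) && parsed > 0
        ? parsed
        : DEFAULT_SIGNAL_CLEANUP_TIMEOUT_MS;
    }
    ```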

  4. lib/shutdown.ts, line 3-4 (link)

    Windows timeout vs flush-iteration budget mismatch

    SIGNAL_CLEANUP_TIMEOUT_MS = 10_000 is the hard wall for shutdown cleanup, but flushPendingSave has MAX_FLUSH_ITERATIONS = 20 and will keep re-queuing a new enqueueSave each iteration when a save is re-armed. each iteration awaits the previous in-flight write. on windows hosts where AV scanners hold the accounts file for 2–5 s per attempt, 20 iterations can easily exceed 10 s, causing requestExit() to fire mid-flush and lose the newly disabled state.

    consider either shrinking MAX_FLUSH_ITERATIONS or tying the flush loop to a wall-clock deadline so the two budgets stay in sync:

    // pseudo-code alignment
    const FLUSH_DEADLINE_MS = SIGNAL_CLEANUP_TIMEOUT_MS * 0.8; // leave 20% margin for process.exit

    or document that MAX_FLUSH_ITERATIONS is intentionally set low enough that it cannot exceed the signal timeout on realistic windows hardware.
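    a wall-clock-deadline flush loop, sketched standalone (the attemptSave callback is an assumed stand-in for one persistToDisk iteration):

    ```typescript
    // Flush loop bounded by a wall-clock deadline instead of an
    // iteration count, so it cannot outlive the signal timeout.
    async function flushWithDeadline(
      attemptSave: () => Promise<boolean>, // true = settled, false = re-armed
      deadlineMs: number,
      now: () => number = Date.now,
    ): Promise<boolean> {
      while (now() < deadlineMs) {
        if (await attemptSave()) return true; // settled within budget
      }
      return false; // deadline hit with work still pending
    }
    ```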

  5. index.ts, line 1688-1698 (link)

    scheduleTrackedManagerPrune waiter not re-registered after settle with new pending work

    in the .finally callback of waitForSettle:

    trackedManagerSettledWaits.delete(manager);
    if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
        trackedAccountManagersForCleanup.delete(manager);
    }

    if managerHasPendingSave(manager) is true at the time .finally runs (because a new debounce save was re-armed between the settle and the callback), the manager correctly stays in trackedAccountManagersForCleanup, but trackedManagerSettledWaits has already been cleared. scheduleTrackedManagerPrune will not be called again unless setCachedAccountManager or invalidateAccountManagerCache fire for this manager.

    in practice this is a memory-only issue — the manager is still in the tracked set and will be flushed by the shutdown hook — but the auto-pruning that was supposed to clean up the set after the work is done never fires. over many reload cycles this keeps stale AccountManager objects alive until the next cache invalidation. consider re-calling scheduleTrackedManagerPrune(manager) inside the .finally when a new pending save is detected:

    }).finally(() => {
        trackedManagerSettledWaits.delete(manager);
        if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
            trackedAccountManagersForCleanup.delete(manager);
        } else if (
            managerHasPendingSave(manager) &&
            trackedAccountManagersForCleanup.has(manager) &&
            !activeAccountManagersForCleanup.has(manager)
        ) {
            // Re-arm the waiter so the set is pruned after the new save settles.
            scheduleTrackedManagerPrune(manager);
        }
    });
  6. lib/accounts.ts, line 1035-1040 (link)

    flushPendingSave is a no-op when both timer and pendingSave are null after a failed save

    the flushPendingSave entry check is:

    const hadDebouncedSave = !!this.saveDebounceTimer;
    if (this.saveDebounceTimer) { clearTimeout(...); ... }
    if (this.pendingSave) { await ...; }
    if (!hadDebouncedSave) { return; }   // ← exits here

    after saveToDiskDebounced fires and enqueueSave's inner save rejects (e.g. EBUSY), the .finally() sets this.pendingSave = null. the debounce timer was already cleared when it fired. so at shutdown, both saveDebounceTimer === null and pendingSave === null. flushPendingSave immediately returns with nothing to flush, and the in-memory disabled state is lost on restart.

    the test "persists newer disabled state after two queued save failures" works because it manually re-arms the debounce before calling flush. the production shutdown path in index.ts calls flushPendingSave without re-arming. adding a no-pending-state test would make this gap visible:

    it("returns without saving when both timer and pendingSave are null after a prior failure", async () => {
      // debounce fires → save fails → both cleared → flush called by shutdown hook → no-op
      mockSaveAccounts.mockRejectedValueOnce(new Error("EBUSY"));
      manager.saveToDiskDebounced(0);
      await vi.advanceTimersByTimeAsync(0);
      await drainMicrotasks(10);  // let failure settle
      // both timer and pendingSave are now null
      await manager.flushPendingSave();  // should at least warn or retry
      expect(mockSaveAccounts).toHaveBeenCalledTimes(1); // only the failed attempt
    });

    this is the windows AV file-lock gap: save fails silently, shutdown flush finds nothing, disabled state reloads as enabled. adding an explicit retry or a "dirty" flag on save failure would close the loop.
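    the "dirty flag" idea can be sketched in isolation — a failed save leaves the manager dirty, so a later flush still has something to retry (simplified model, not the PR's code):

    ```typescript
    // Dirty flag: only a successful write clears it, so the shutdown
    // flush can detect and retry a previously failed save.
    class DirtyTrackingSaver {
      private dirty = false;
      attempts = 0;

      async save(write: () => Promise<void>): Promise<void> {
        this.dirty = true;
        this.attempts++;
        try {
          await write();
          this.dirty = false; // only a successful write clears the flag
        } catch {
          /* keep dirty = true so a later flush retries */
        }
      }

      async flush(write: () => Promise<void>): Promise<void> {
        if (!this.dirty) return; // nothing lost, nothing to do
        await this.save(write);
      }
    }
    ```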

  7. lib/shutdown.ts, line 88-118 (link)

    Second SIGINT/SIGTERM silently swallowed during cleanup window

    the switch from process.once to process.on + if (signalExitPending) return means a second Ctrl+C within the 10-second cleanup window is a silent no-op. previously, a second signal would hit the OS default handler and force-kill the process — a deliberate user escape hatch. now, users have no visible feedback and no way to interrupt a hung cleanup; they must wait the full SIGNAL_CLEANUP_TIMEOUT_MS (10 s) before the process exits via the deadline timer.

    consider logging a message on the second signal so the user knows cleanup is in progress:
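    one way that could look — the logger shape here is assumed, not the plugin's actual API:

    ```typescript
    // Surface the second Ctrl+C instead of swallowing it silently.
    function makeSignalHandler(
      log: (msg: string) => void,
      startCleanup: () => void,
    ): (signal: string) => void {
      let cleanupInProgress = false;
      return (signal: string) => {
        if (cleanupInProgress) {
          log(`${signal} received again — cleanup already in progress, please wait`);
          return;
        }
        cleanupInProgress = true;
        startCleanup();
      };
    }
    ```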

  8. lib/accounts.ts, line 980-985 (link)

    Unreachable else branch — dead code

    the else { account.enabled = false; } branch can never execute. to reach it you need !enabled, !reason, and account.disabledReason === "auth-failure" — but that exact combination is already handled by the early return at line 966-968:

    if (!enabled && account.disabledReason === "auth-failure") {
        account.enabled = false;
        return { ok: true, account };
    }

    so the final else is dead. this won't cause a runtime bug today, but it misleads future readers into thinking there is a live code path and can mask a refactor that accidentally removes the early-return guard. suggest removing the dead branch:


Prompt To Fix All With AI
This is a comment left during a code review.
Path: lib/accounts.ts
Line: 980-985

Comment:
**Unreachable `else` branch — dead code**

the `else { account.enabled = false; }` branch can never execute. to reach it you need `!enabled`, `!reason`, and `account.disabledReason === "auth-failure"` — but that exact combination is already handled by the early return at line 966-968:

```ts
if (!enabled && account.disabledReason === "auth-failure") {
    account.enabled = false;
    return { ok: true, account };
}
```

so the final `else` is dead. this won't cause a runtime bug today, but it misleads future readers into thinking there is a live code path and can mask a refactor that accidentally removes the early-return guard. suggest removing the dead branch:

```suggestion
		} else if (account.disabledReason !== "auth-failure") {
			account.enabled = false;
			account.disabledReason = "user";
		}
		return { ok: true, account };
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: lib/accounts.ts
Line: 1229-1234

Comment:
**Redundant final guard in `flushPendingSave`**

the `if (this.saveDebounceTimer === null && this.pendingSave === null) { return; }` check is always `true` at this point. immediately above we already checked `if (this.saveDebounceTimer !== null || this.pendingSave !== null)` and only fell through (without `continue`) when both are `null`. the guard is not wrong, but it obscures the loop exit condition and will confuse future readers. a simple `return` (or `return` with comment) communicates the intent more clearly:

```suggestion
			if (flushSaveError) {
				throw flushSaveError;
			}
			return;
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: lib/accounts.ts
Line: 1120-1130

Comment:
**`saveRetryNeeded = true` set before the debounce fires — concurrent `flushPendingSave` caller sees stale flag**

`this.saveRetryNeeded = true` is set at the top of `saveToDiskDebounced`, before the debounce timer fires. if `flushPendingSave` is called while the timer is still armed and `pendingSave` is null, it cancels the timer (`hadDebouncedSave = true`) and then checks `!hadDebouncedSave && !this.saveRetryNeeded`. since `saveRetryNeeded` is already `true`, this correctly triggers a flush save. that part is fine.

however, if `flushPendingSave` is NOT called and the debounce fires, runs the save, and **succeeds**, `saveRetryNeeded` gets cleared to `false` in `enqueueSave`. a subsequent call to `flushPendingSave` (with nothing pending and `saveRetryNeeded = false`) will return without writing. that is the correct behavior.

the one blind spot: a **second** rapid `saveToDiskDebounced` call during an in-flight save sets `saveRetryNeeded = true` again — but if the in-flight save was already started with the correct (latest) state, the second debounce fires a redundant write after the in-flight save completes. this is safe (idempotent extra write), but worth a comment clarifying that `saveRetryNeeded` is an optimistic "at-least-once" signal, not a "new data exists" flag, to avoid confusion during future maintenance.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: index.ts
Line: 211-232

Comment:
**`accountManagerCleanupHook ??= ...` initialized once but re-registered on every plugin instantiation**

`accountManagerCleanupHook` is a module-level variable initialized via `??=`, so only the first plugin instance creates the hook closure. every subsequent instantiation still calls `unregisterCleanup(accountManagerCleanupHook)` then `registerCleanup(accountManagerCleanupHook)`, which re-appends the same function reference to the cleanup list — at the cost of a redundant array splice + push on every reload.

this is harmless today (and the comment correctly explains the intent), but if the `??=` pattern ever silently breaks (e.g., an ES-module isolation change that resets module state between reloads), the hook could end up deregistered but `cleanupFunctions` would show it as absent, so `removeShutdownHandlers()` could fire and leave the process unprotected. a small defensive assertion or log on second registration would make the invariant observable without restructuring the logic.

How can I resolve this? If you propose a fix, please make it concise.

Last reviewed commit: b979b68

Context used:

  • Rule used - What: Every code change must explain how it defend... (source)

numman-ali and others added 30 commits November 27, 2025 12:57
- Fix orphaned function_call_output errors during summary/compaction
- Add GitHub API rate limit fallback for Codex instructions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
**Fixed**:
- Compaction losing context: Only filter orphaned function_call_output items, preserve matched pairs
- Agent creation failing: Properly detect streaming vs non-streaming requests
- SSE/JSON response handling: Convert SSE→JSON for generateText(), passthrough for streamText()

**Added**:
- gpt-5.1-chat-latest model support (normalizes to gpt-5.1)

**Technical**:
- Capture original stream value before transformation
- API always gets stream=true, but response handling based on original intent
- Orphan detection: only remove function_call_output without matching function_call

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
## Added
- GPT 5.2 model family with 4 reasoning presets (low/medium/high/xhigh)
- Full image input support for all 16 models via modalities config
- GPT 5.2 model family in codex.ts with dedicated prompt handling

## Changed
- Model ordering: GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1
- Removed default presets without explicit reasoning suffix
- Updated @opencode-ai/plugin to ^1.0.150 (audit fix)
- Test script now uses local dist and includes GPT 5.2 tests

## Tests
- 193 unit tests + 16 integration tests (all passing)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…fix, and HTML version update

This release combines fixes and features from PRs #62, #63, and #64:

### Added
- "none" reasoning effort support for GPT-5.2 and GPT-5.1 general purpose models
- gpt-5.2-none and gpt-5.1-none model mappings and presets (now 18 total models)
- 4 new unit tests for "none" reasoning behavior (197 total tests)

### Fixed
- Orphaned function_call_output 400 API errors - now converts orphans to assistant
  messages to preserve context while avoiding API errors
- OAuth HTML version display updated from 1.0.4 to 4.1.0

### Technical Details
- getReasoningConfig() detects GPT-5.1 general purpose models and allows "none"
- Codex variants auto-convert "none" to "low" (or "medium" for Codex Mini)
- Orphan handling now works regardless of tools presence in request

Contributors: @code-yeongyu (PR #63), @kanemontreuil (PR #64), @ben-vargas (PR #62)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Auth reliability + error handling alignment + request integrity fixes
Remap activeIndex based on the selected account key to avoid unexpected account switching when deduplicating stored accounts. Add regression tests for storage normalization and oauth-success copy, and refresh docs/CI labeling.
Rename the npm package/bin to opencode-openai-codex-auth-multiaccount and repoint installer/config/docs to the fork repo. This keeps upstream lean while preserving the multi-account rotation feature set.
Rename package to opencode-openai-codex-auth-multi
@ndycode ndycode requested a review from Copilot March 15, 2026 08:02
Contributor

Copilot AI left a comment


Pull request overview

This PR changes the account-rotation/auth-failure behavior so same-refresh-token account variants are disabled with a reason instead of being removed, and hardens persistence/shutdown flows to reduce state-loss during overlapping saves and process termination.

Changes:

  • Replace grouped account removal with enabled: false + disabledReason: "auth-failure" and revive only auth-failure-disabled accounts on explicit login/restore.
  • Serialize account saves (enqueueSave) and improve flushPendingSave to recover from older write failures while draining queued saves.
  • Add shutdown cleanup hooks to flush pending saves and normalize storage shapes (omit unset optional fields, normalize invalid disabled reasons).

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
lib/accounts.ts Introduces disabling by refresh token, trySetAccountEnabled, save serialization, and iterative flush logic.
index.ts Switches to disable/revive semantics, tracks managers for shutdown flushing, and blocks manual re-enable of auth-failure disables.
lib/shutdown.ts Makes cleanup idempotent and keeps signal handlers active through async cleanup.
lib/storage.ts Normalizes enabled / disabledReason on load; strips stale metadata to match storage conventions.
lib/storage/migrations.ts Adds AccountDisabledReason types + normalization helpers; v1→v3 migration defaults disabled accounts to "user".
lib/schemas.ts Adds lenient DisabledReasonSchema preprocessing to avoid schema warnings on unknown reasons.
lib/constants.ts Renames max auth failure constant to reflect disabling.
test/accounts.test.ts Adds coverage for new disabling, enable/disable API, queued saves, and flush behaviors (incl. EBUSY).
test/index.test.ts Adds coverage for disabled-pool messaging, revival semantics, shutdown flush, and manage-menu blocking.
test/index-retry.test.ts Updates mocked manager interface for enabled-count selection.
test/shutdown.test.ts Expands coverage for handler installation, idempotency, and cleanup draining.
test/storage.test.ts Adds migration/normalization coverage for enabled omission and disabledReason normalization/stripping.
test/schemas.test.ts Verifies schema-level normalization of unknown disabled reasons and no validation warnings.


Comment thread lib/accounts.ts Outdated
Comment thread lib/shutdown.ts Outdated
Comment thread lib/storage/migrations.ts
Comment thread index.ts
Comment thread index.ts
@ndycode ndycode requested a review from Copilot March 15, 2026 08:40
Contributor

Copilot AI left a comment


Pull request overview

This PR changes account-auth failure handling to disable (and persist reasons for) same-refresh-token account groups instead of removing them, and strengthens shutdown/save reliability so disabled state is less likely to be lost.

Changes:

  • Introduces disabledReason (user | auth-failure) and normalizes/migrates storage to preserve “omit unset optional fields” while keeping legacy disabled accounts treated as user-disabled.
  • Switches auth refresh failure handling from group removal to group disable + improved user messaging and explicit-login revival semantics.
  • Serializes account saves, adds shutdown-time flush of pending debounced saves, and hardens shutdown cleanup behavior (signals, reentrancy, timeouts), with expanded test coverage.

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
test/storage.test.ts Adds migration/normalization tests for enabled + disabledReason, including flagged-account load behavior.
test/shutdown.test.ts Expands shutdown behavior tests (reentrancy, signals during cleanup, timeout behavior, handler registration).
test/schemas.test.ts Validates schema behavior for unknown/invalid disabledReason values without warnings.
test/index.test.ts Updates plugin tests for disabled-pool messaging, grouped disable behavior, manager flushing, and manage-menu rules.
test/index-retry.test.ts Aligns retry tests with enabled-account counting and shutdown cleanup isolation.
test/accounts.test.ts Adds extensive AccountManager tests for disable reasons, grouped disabling, and save/flush serialization.
lib/storage/migrations.ts Adds AccountDisabledReason utilities and V1→V3 migration defaults for legacy disabled accounts.
lib/storage.ts Normalizes persisted enabled and disabledReason fields, and normalizes flagged storage accordingly.
lib/shutdown.ts Makes cleanup re-entrant, installs/removes signal handlers, adds timeout-forced exit on stalled cleanup.
lib/schemas.ts Adds lenient DisabledReasonSchema preprocessing for storage compatibility.
lib/constants.ts Renames auth-failure threshold constant to disabling semantics.
lib/accounts.ts Implements grouped disable, reason-aware enable/disable API, and queued/flushable serialized saves.
index.ts Uses enabled-account counts in selection, disables groups on auth failures, revives only auth-failure disables on login, and flushes pending saves on shutdown.


Comment thread lib/shutdown.ts
Comment thread index.ts

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 3 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="test/index.test.ts">

<violation number="1" location="test/index.test.ts:2340">
P3: Add hasPendingSave/flushPendingSave/waitForPendingSaveToSettle stubs to this customManager mock; otherwise the shutdown cleanup treats it as having pending saves and calls flushPendingSave(), which is undefined, causing a teardown TypeError warning and leaving the flush path untested.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread test/index.test.ts
@ndycode
Owner Author

ndycode commented Mar 15, 2026

@coderabbitai review

@cubic-dev-ai please re-run review on the latest commit b257c2a.

@ndycode ndycode requested a review from Copilot March 15, 2026 13:56
@coderabbitai
Contributor

coderabbitai Bot commented Mar 15, 2026

@ndycode Sure! I'll kick off a new review of the latest commit b257c2a now.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@ndycode
Owner Author

ndycode commented Mar 15, 2026

@coderabbitai resume

Contributor

Copilot AI left a comment


Pull request overview

This PR keeps account groups visible after repeated auth refresh failures by disabling them (with a persisted reason) instead of removing them, while also hardening persistence by serializing saves and flushing debounced writes during shutdown.

Changes:

  • Replace “remove after auth failure” with “disable and persist disabledReason”, and revive only auth-failure disables after explicit login/restore flows.
  • Serialize disk writes and improve shutdown cleanup to flush pending/debounced saves with retry behavior for transient Windows file locks.
  • Normalize storage parsing/migration for enabled/disabledReason while preserving the “omit unset optional fields” storage shape.
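The serialized-save behavior summarized above can be sketched as a promise-chained queue: each write waits for the previous one, and shutdown can flush whatever is still in flight. This is a minimal illustration under assumed names (SaveQueue, enqueueSave, flushPendingSave), not the plugin's actual lib/accounts.ts code:

```typescript
// Minimal sketch of a serialized save lane. Overlapping callers never write
// concurrently because every write is chained onto the previous one, and
// flushPendingSave() can drain the chain at shutdown.
class SaveQueue {
	private pendingSave: Promise<void> = Promise.resolve();
	private pendingCount = 0;

	enqueueSave(write: () => Promise<void>): Promise<void> {
		this.pendingCount += 1;
		const next = this.pendingSave
			.catch(() => {}) // a failed earlier save must not block later ones
			.then(write)
			.finally(() => {
				this.pendingCount -= 1;
			});
		this.pendingSave = next;
		return next;
	}

	hasPendingSave(): boolean {
		return this.pendingCount > 0;
	}

	async flushPendingSave(): Promise<void> {
		// Await the tail of the chain; loop in case new saves were
		// enqueued while we were waiting.
		while (this.hasPendingSave()) {
			await this.pendingSave.catch(() => {});
		}
	}
}
```

The key design point is that the chain is the single source of truth: there is no separate "is a write running" flag that can drift out of sync with the actual promise.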

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
lib/accounts.ts Adds disable-with-reason logic and serialized save/flush machinery.
lib/constants.ts Renames the auth-failure threshold const to reflect disabling behavior.
lib/shutdown.ts Reworks signal/beforeExit handling to drain cleanup and force-exit after a timeout.
lib/storage/migrations.ts Introduces AccountDisabledReason and normalization helpers; updates v1→v3 migration.
lib/storage.ts Normalizes persisted enabled/disabledReason fields for accounts and flagged accounts.
lib/schemas.ts Adds a lenient schema for disabledReason to avoid rejecting legacy/unknown values.
index.ts Switches to disable flow, adds revive-on-login option, updates selection UX, and adds shutdown flush hook.
test/accounts.test.ts Adds coverage for disable reasons, blocked re-enable, and queued/flush save behavior.
test/index.test.ts Updates plugin behavior tests for disabled pools, revival, toasts, and shutdown flush.
test/index-retry.test.ts Adjusts mocks for enabled-account counting.
test/schemas.test.ts Covers schema normalization behavior for unknown disabledReason values.
test/shutdown.test.ts Expands shutdown handler/cleanup state tests (signals, beforeExit, timeout).
test/storage.test.ts Adds migration/normalization tests for enabled/disabledReason and flagged records.
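The lenient disabledReason handling described for lib/schemas.ts (not rejecting legacy or unknown values) amounts to a small normalization step. The helper below is a hypothetical stand-in that mirrors the described behavior; the actual schema code may differ:

```typescript
type DisabledReason = "user" | "auth-failure";

// Sketch of lenient normalization: known reasons pass through, anything
// unknown (legacy or future data) collapses to undefined rather than
// failing the whole parse.
function normalizeDisabledReason(value: unknown): DisabledReason | undefined {
	return value === "user" || value === "auth-failure"
		? (value as DisabledReason)
		: undefined;
}
```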


Comment thread lib/shutdown.ts
Comment on lines +101 to 112
let timeoutHandle: ReturnType<typeof setTimeout> | null = setTimeout(() => {
	requestExit();
}, SIGNAL_CLEANUP_TIMEOUT_MS);
void runCleanup().finally(() => {
	try {
		requestExit();
	} finally {
		// `process.exit()` normally terminates immediately. This only runs when exit
		// is intercepted (for example in tests), so we need to unlatch signal state.
		resetSignalStateAfterInterceptedExit();
	}
});
Comment thread index.ts
Comment on lines +3618 to +3621
const toastShown = await showToast(message, "warning");
if (!toastShown) {
	console.log("\nRun 'opencode auth login' to re-enable this account.\n");
}
Comment thread index.ts
Comment on lines +1715 to +1722
const waitForSettle = manager
	.waitForPendingSaveToSettle()
	.finally(() => {
		trackedManagerSettledWaits.delete(manager);
		if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
			trackedAccountManagersForCleanup.delete(manager);
		}
	});
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/storage.ts (1)

1003-1037: ⚠️ Potential issue | 🟠 Major

Legacy blocked-account migration loses auth-failure provenance here.

loadFlaggedAccounts() also routes openai-codex-blocked-accounts.json through this normalizer. Those records predate disabledReason, so this branch currently drops the auth-failure marker the new revive rules rely on; if a legacy record also omitted enabled, it can lose the disabled state too. This path should stamp legacy blocked entries as enabled: false / disabledReason: "auth-failure" instead of sending them through the generic normalization unchanged. Using normalizeStoredEnabled() here would also keep flagged storage on the same omit-when-enabled format as main storage.

Possible direction
-		const disabledReason = normalizeStoredAccountDisabledReason(
-			rawAccount.enabled,
-			rawAccount.disabledReason,
-		);
+		const enabled = normalizeStoredEnabled(rawAccount.enabled);
+		const disabledReason = normalizeStoredAccountDisabledReason(
+			rawAccount.enabled,
+			rawAccount.disabledReason,
+		);
@@
-			enabled: typeof rawAccount.enabled === "boolean" ? rawAccount.enabled : undefined,
+			enabled,
// In the legacy `openai-codex-blocked-accounts.json` migration path:
const migrated = normalizeFlaggedStorage(legacyData);
for (const account of migrated.accounts) {
  account.enabled ??= false;
  account.disabledReason ??= "auth-failure";
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/storage.ts` around lines 1003 - 1037, The normalization currently drops
legacy auth-failure provenance for records routed through loadFlaggedAccounts()
by not stamping missing enabled/disabledReason values; update the normalization
so that after computing disabledReason via
normalizeStoredAccountDisabledReason(rawAccount.enabled,
rawAccount.disabledReason) you set enabled and disabledReason defaults for
legacy blocked entries: if disabledReason is undefined and rawAccount.enabled is
undefined (legacy path), set normalized.enabled = normalizeStoredEnabled(false)
(or otherwise set enabled = false using normalizeStoredEnabled convention) and
normalized.disabledReason = "auth-failure"; prefer calling
normalizeStoredEnabled() rather than hard-omitting to keep flagged storage
format consistent. Ensure this change is applied where FlaggedAccountMetadataV1
is built (the block using normalizeStoredAccountDisabledReason and producing
normalized) so legacy openai-codex-blocked-accounts.json records retain
auth-failure provenance.
♻️ Duplicate comments (1)
lib/accounts.ts (1)

986-1049: ⚠️ Potential issue | 🟠 Major

Keep the public save path inside the serialized lane.

enqueueSave() only protects callers that opt into it. Because saveToDisk() on Line 986 is still the raw writer, any direct caller can start a write that pendingSave and flushPendingSave() never observe, reopening the overlap/shutdown gap this queue is supposed to close.

Proposed fix
-	async saveToDisk(): Promise<void> {
+	async saveToDisk(): Promise<void> {
+		return this.enqueueSave(() => this.persistToDisk());
+	}
+
+	private async persistToDisk(): Promise<void> {
 		const activeIndexByFamily: Partial<Record<ModelFamily, number>> = {};
 		for (const family of MODEL_FAMILIES) {
 			const raw = this.currentAccountIndexByFamily[family];
@@
-		await saveAccounts(storage);
+		await saveAccounts(storage);
 	}

Then switch internal queued call sites to this.saveToDisk() or this.persistToDisk() so there is only one raw write path.
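The single-write-path shape this review comment asks for can be sketched as follows; persistToDisk and the counter stand in for the real storage serialization, and this is an illustration of the refactor direction, not the actual lib/accounts.ts implementation:

```typescript
// Sketch: the raw writer becomes private, and the public saveToDisk() always
// routes through the queue, so every write is observed by
// pendingSave / flushPendingSave.
class AccountManagerSketch {
	private pendingSave: Promise<void> = Promise.resolve();
	public writes = 0;

	private async persistToDisk(): Promise<void> {
		this.writes += 1; // stands in for the real saveAccounts(storage) call
	}

	private enqueueSave(write: () => Promise<void>): Promise<void> {
		const next = this.pendingSave.catch(() => {}).then(write);
		this.pendingSave = next;
		return next;
	}

	// The only public save entry point: always serialized.
	async saveToDisk(): Promise<void> {
		return this.enqueueSave(() => this.persistToDisk());
	}

	async flushPendingSave(): Promise<void> {
		await this.pendingSave.catch(() => {});
	}
}
```

Because saveToDisk() itself enqueues, internal call sites can call it directly without wrapping it in enqueueSave again, which is what makes the double-queuing concern go away.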

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/accounts.ts` around lines 986 - 1049, The public save path must go
through the queue to avoid unobserved writes: make the raw disk writer
(currently saveToDisk) an internal/private method (rename to persistToDisk or
mark private) and create a single public method saveToDisk that calls
enqueueSave(() => this.persistToDisk()), then update all internal call sites to
call this.saveToDisk() (or this.persistToDisk() for the internal implementation)
so every write is funneled via enqueueSave and observed by
pendingSave/flushPendingSave.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@index.ts`:
- Around line 1727-1754: Detached account managers are being removed from
trackedAccountManagersForCleanup too eagerly, which can drop a manager that
later enqueues its first save; in setCachedAccountManager and
invalidateAccountManagerCache stop deleting the detached manager from
trackedAccountManagersForCleanup when managerHasPendingSave(...) is
false—instead always keep the detached manager in
trackedAccountManagersForCleanup (and call
scheduleTrackedManagerPrune(cachedAccountManager) only when it has pending
saves) until an explicit “no more users”/finalized signal removes it; update
logic around cachedAccountManager, activeAccountManagersForCleanup,
trackedAccountManagersForCleanup, pruneTrackedAccountManagersForCleanup,
scheduleTrackedManagerPrune and accountManagerPromise accordingly.
- Around line 3609-3619: The logs in the shouldEnable branch leak a raw account
identifier via the accountId field in logWarn and logInfo; update the handlers
around shouldEnable / target.disabledReason === "auth-failure" to stop emitting
target.accountId directly—either remove the accountId property or replace it
with a redacted value/short suffix (e.g., last 4 chars) before calling logWarn
and logInfo (the surrounding symbols are shouldEnable, target.disabledReason,
target.accountId, logWarn, logInfo, and showToast); ensure showToast behavior is
unchanged but that logging no longer includes the full raw identifier.
- Around line 1715-1724: The finalizer currently deletes
trackedManagerSettledWaits for manager even when a new save started
concurrently, so the manager can be left un-pruned; change the finally callback
on waitForSettle (the Promise from manager.waitForPendingSaveToSettle()) to
re-arm a new waiter if work is still pending: after deleting
trackedManagerSettledWaits.delete(manager) and before possibly removing from
trackedAccountManagersForCleanup, check if managerHasPendingSave(manager) (or
activeAccountManagersForCleanup.has(manager) as appropriate) and if so create a
fresh promise by calling manager.waitForPendingSaveToSettle() and store it back
into trackedManagerSettledWaits.set(manager, newWait) so the manager will get
another prune pass; ensure you use the same trackedManagerSettledWaits and
managerHasPendingSave/activeAccountManagersForCleanup symbols in this logic.

In `@lib/accounts.ts`:
- Around line 947-975: The code currently clears cooldown metadata
unconditionally when handling enabled=true in trySetAccountEnabled; change it to
only clear cooldown when performing a real re-enable transition (account.enabled
was false and is being set to true). In trySetAccountEnabled, capture the
account's prior enabled state (e.g., const wasEnabled = account.enabled) before
mutating, then only call this.clearAccountCooldown(account) and delete
account.disabledReason when enabled && !wasEnabled (preserving the auth-failure
and other branches as-is).

In `@test/index.test.ts`:
- Around line 3667-3782: Update the assertion for the legacy-disabled account to
match the normalized storage contract: for the revived entry with accountId
"org-legacy" (found from revivedEntries), assert that enabled is false and
disabledReason is "user" (instead of expecting undefined), and retain the checks
for coolingDownUntil and cooldownReason; this ensures the test verifies the
legacy-disable migration normalizes undefined reason to "user" for entries where
enabled === false.

---

Outside diff comments:
In `@lib/storage.ts`:
- Around line 1003-1037: The normalization currently drops legacy auth-failure
provenance for records routed through loadFlaggedAccounts() by not stamping
missing enabled/disabledReason values; update the normalization so that after
computing disabledReason via
normalizeStoredAccountDisabledReason(rawAccount.enabled,
rawAccount.disabledReason) you set enabled and disabledReason defaults for
legacy blocked entries: if disabledReason is undefined and rawAccount.enabled is
undefined (legacy path), set normalized.enabled = normalizeStoredEnabled(false)
(or otherwise set enabled = false using normalizeStoredEnabled convention) and
normalized.disabledReason = "auth-failure"; prefer calling
normalizeStoredEnabled() rather than hard-omitting to keep flagged storage
format consistent. Ensure this change is applied where FlaggedAccountMetadataV1
is built (the block using normalizeStoredAccountDisabledReason and producing
normalized) so legacy openai-codex-blocked-accounts.json records retain
auth-failure provenance.

---

Duplicate comments:
In `@lib/accounts.ts`:
- Around line 986-1049: The public save path must go through the queue to avoid
unobserved writes: make the raw disk writer (currently saveToDisk) an
internal/private method (rename to persistToDisk or mark private) and create a
single public method saveToDisk that calls enqueueSave(() =>
this.persistToDisk()), then update all internal call sites to call
this.saveToDisk() (or this.persistToDisk() for the internal implementation) so
every write is funneled via enqueueSave and observed by
pendingSave/flushPendingSave.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9fddc73e-c8f2-49b7-992e-be65efbaa709

📥 Commits

Reviewing files that changed from the base of the PR and between 2b851cc and b257c2a.

📒 Files selected for processing (11)
  • index.ts
  • lib/accounts.ts
  • lib/schemas.ts
  • lib/shutdown.ts
  • lib/storage.ts
  • lib/storage/migrations.ts
  • test/accounts.test.ts
  • test/index.test.ts
  • test/schemas.test.ts
  • test/shutdown.test.ts
  • test/storage.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/schemas.ts

Comment thread index.ts
Comment on lines +1715 to +1724
const waitForSettle = manager
	.waitForPendingSaveToSettle()
	.finally(() => {
		trackedManagerSettledWaits.delete(manager);
		if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
			trackedAccountManagersForCleanup.delete(manager);
		}
	});

trackedManagerSettledWaits.set(manager, waitForSettle);
Contributor


⚠️ Potential issue | 🟠 Major

Re-arm the stale-manager waiter when work is still pending.

The finalizer clears trackedManagerSettledWaits, but if a new save is already pending at that point the manager never gets another prune pass and can sit in trackedAccountManagersForCleanup until shutdown.

♻️ Proposed fix
 			const waitForSettle = manager
 				.waitForPendingSaveToSettle()
 				.finally(() => {
 					trackedManagerSettledWaits.delete(manager);
-					if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
-						trackedAccountManagersForCleanup.delete(manager);
-					}
+					if (activeAccountManagersForCleanup.has(manager)) {
+						return;
+					}
+					if (managerHasPendingSave(manager)) {
+						scheduleTrackedManagerPrune(manager);
+						return;
+					}
+					trackedAccountManagersForCleanup.delete(manager);
 				});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@index.ts` around lines 1715 - 1724, The finalizer currently deletes
trackedManagerSettledWaits for manager even when a new save started
concurrently, so the manager can be left un-pruned; change the finally callback
on waitForSettle (the Promise from manager.waitForPendingSaveToSettle()) to
re-arm a new waiter if work is still pending: after deleting
trackedManagerSettledWaits.delete(manager) and before possibly removing from
trackedAccountManagersForCleanup, check if managerHasPendingSave(manager) (or
activeAccountManagersForCleanup.has(manager) as appropriate) and if so create a
fresh promise by calling manager.waitForPendingSaveToSettle() and store it back
into trackedManagerSettledWaits.set(manager, newWait) so the manager will get
another prune pass; ensure you use the same trackedManagerSettledWaits and
managerHasPendingSave/activeAccountManagersForCleanup symbols in this logic.

Comment thread test/index.test.ts
Comment on lines +3667 to +3782
it("preserves user-disabled and legacy-disabled variants while reviving auth-failure disables on explicit login", async () => {
const accountsModule = await import("../lib/accounts.js");
const authModule = await import("../lib/auth/auth.js");

vi.mocked(authModule.exchangeAuthorizationCode)
.mockResolvedValueOnce({
type: "success",
access: "access-no-org-1",
refresh: "refresh-shared",
expires: Date.now() + 300_000,
idToken: "id-no-org-1",
})
.mockResolvedValueOnce({
type: "success",
access: "access-no-org-2",
refresh: "refresh-shared",
expires: Date.now() + 300_000,
idToken: "id-no-org-2",
});
vi.mocked(accountsModule.getAccountIdCandidates).mockReturnValue([]);
vi.mocked(accountsModule.extractAccountId)
.mockReturnValueOnce("account-1")
.mockReturnValueOnce(undefined);

const mockClient = createMockClient();
const { OpenAIOAuthPlugin } = await import("../index.js");
const plugin = await OpenAIOAuthPlugin({ client: mockClient } as never) as unknown as PluginType;
const autoMethod = plugin.auth.methods[0] as unknown as {
authorize: (inputs?: Record<string, string>) => Promise<{ instructions: string }>;
};

await autoMethod.authorize({ loginMode: "add", accountCount: "1" });
await autoMethod.authorize({ loginMode: "add", accountCount: "1" });

expect(mockStorage.accounts).toHaveLength(1);
mockStorage.accounts = [
{
accountId: "unrelated-disabled",
organizationId: "org-unrelated",
email: "unrelated@example.com",
refreshToken: "different-refresh",
enabled: false,
disabledReason: "user",
addedAt: 9,
lastUsed: 9,
},
{
accountId: "org-shared",
organizationId: "org-keep",
email: "org@example.com",
refreshToken: "shared-refresh",
enabled: false,
disabledReason: "auth-failure",
coolingDownUntil: 12_000,
cooldownReason: "auth-failure",
addedAt: 10,
lastUsed: 10,
},
{
accountId: "org-sibling",
organizationId: "org-sibling",
email: "sibling@example.com",
refreshToken: "shared-refresh",
enabled: false,
disabledReason: "user",
addedAt: 11,
lastUsed: 11,
},
{
accountId: "org-legacy",
organizationId: "org-legacy",
email: "legacy@example.com",
refreshToken: "shared-refresh",
enabled: false,
coolingDownUntil: 18_000,
cooldownReason: "auth-failure",
addedAt: 12,
lastUsed: 12,
},
];

vi.mocked(authModule.exchangeAuthorizationCode).mockResolvedValueOnce({
type: "success",
access: "access-shared-refresh",
refresh: "shared-refresh",
expires: Date.now() + 300_000,
idToken: "id-shared-refresh",
});
vi.mocked(accountsModule.getAccountIdCandidates).mockReturnValueOnce([]);
vi.mocked(accountsModule.extractAccountId).mockImplementation((accessToken) =>
accessToken === "access-shared-refresh" ? "org-shared" : "account-1",
);

const mockClient = createMockClient();
const { OpenAIOAuthPlugin } = await import("../index.js");
const plugin = (await OpenAIOAuthPlugin({
client: mockClient,
} as never)) as unknown as PluginType;
const autoMethod = plugin.auth.methods[0] as unknown as {
authorize: (inputs?: Record<string, string>) => Promise<{ instructions: string }>;
};

await autoMethod.authorize({ loginMode: "add", accountCount: "1" });

const revivedEntries = mockStorage.accounts.filter(
(account) => account.refreshToken === "shared-refresh",
);
expect(revivedEntries).toHaveLength(3);
expect(
revivedEntries.find((account) => account.accountId === "org-shared"),
).toMatchObject({
accountId: "org-shared",
enabled: undefined,
disabledReason: undefined,
coolingDownUntil: undefined,
cooldownReason: undefined,
});
expect(
revivedEntries.find((account) => account.accountId === "org-sibling"),
).toMatchObject({
accountId: "org-sibling",
enabled: false,
disabledReason: "user",
});
expect(
revivedEntries.find((account) => account.accountId === "org-legacy"),
).toMatchObject({
accountId: "org-legacy",
enabled: false,
coolingDownUntil: 18_000,
cooldownReason: "auth-failure",
});
expect(
revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
).toBeUndefined();
expect(
revivedEntries.find((account) => account.accountId === "org-shared")?.accessToken,
).toBe("access-shared-refresh");
expect(
mockStorage.accounts.find((account) => account.accountId === "unrelated-disabled"),
).toMatchObject({
accountId: "unrelated-disabled",
refreshToken: "different-refresh",
enabled: false,
disabledReason: "user",
});
});
Contributor


⚠️ Potential issue | 🟡 Minor

Keep the legacy-disabled expectation aligned with normalized storage.

Line 3769 currently expects org-legacy to keep disabledReason === undefined, but this PR’s normalization contract treats enabled: false without a reason as "user". Leaving the assertion as-is lets the test pass on a state that production should already have canonicalized, so it can miss regressions in the legacy-disable migration path.

Proposed fix
 		mockStorage.accounts = [
@@
 			{
 				accountId: "org-legacy",
 				organizationId: "org-legacy",
 				email: "legacy@example.com",
 				refreshToken: "shared-refresh",
 				enabled: false,
+				disabledReason: "user",
 				coolingDownUntil: 18_000,
 				cooldownReason: "auth-failure",
 				addedAt: 12,
 				lastUsed: 12,
 			},
@@
-		expect(
-			revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
-		).toBeUndefined();
+		expect(
+			revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
+		).toBe("user");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` around lines 3667 - 3782, Update the assertion for the
legacy-disabled account to match the normalized storage contract: for the
revived entry with accountId "org-legacy" (found from revivedEntries), assert
that enabled is false and disabledReason is "user" (instead of expecting
undefined), and retain the checks for coolingDownUntil and cooldownReason; this
ensures the test verifies the legacy-disable migration normalizes undefined
reason to "user" for entries where enabled === false.
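The normalization contract this comment relies on (enabled: false with no recorded reason canonicalizes to "user"; enabled accounts never carry a reason) can be sketched as a small helper. This is a hypothetical stand-in mirroring the described behavior of normalizeStoredAccountDisabledReason, not the actual lib/storage/migrations.ts code:

```typescript
type DisabledReason = "user" | "auth-failure";

// Sketch of the normalization contract: a disabled account with a missing
// reason is stamped "user", and enabled accounts carry no reason at all
// (matching the "omit unset optional fields" storage shape).
function normalizeStoredAccountDisabledReason(
	enabled: boolean | undefined,
	reason: DisabledReason | undefined,
): DisabledReason | undefined {
	if (enabled !== false) return undefined; // enabled accounts carry no reason
	return reason ?? "user"; // disabled + missing reason canonicalizes to "user"
}
```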

@coderabbitai
Contributor

coderabbitai Bot commented Mar 15, 2026

✅ Actions performed

Reviews resumed.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
lib/accounts.ts (1)

997-1036: ⚠️ Potential issue | 🟠 Major

saveToDisk() still bypasses the serialized save lane.

pendingSave is only set through enqueueSave, but saveToDisk() remains a direct writer. Any direct call can overlap queued writes, and flushPendingSave() may conclude without waiting for that write path.

🔧 Refactor direction
- async saveToDisk(): Promise<void> {
+ private async writeToDisk(): Promise<void> {
   // existing storage serialization + saveAccounts(...)
 }

+ async saveToDisk(): Promise<void> {
+   return this.enqueueSave(() => this.writeToDisk());
+ }

Then replace explicit queue wrappers that currently wrap this.saveToDisk() in enqueueSave (e.g., Line 287, Line 1121, Line 1192) with direct this.saveToDisk() calls to avoid double-queuing.

Also applies to: 1038-1060

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/accounts.ts` around lines 997 - 1036, saveToDisk currently writes
directly and can race with the queued save lane because pendingSave is only set
via enqueueSave; fix this by funneling all disk writes through the serialized
save path: make saveToDisk an internal writer (or otherwise stop calling it
directly), update every direct caller to call enqueueSave() instead (so
pendingSave is set and the queue serializes writes), and ensure flushPendingSave
awaits the pendingSave promise before returning; refer to the methods
saveToDisk, enqueueSave, pendingSave, flushPendingSave and the downstream
saveAccounts call when making these changes.
index.ts (1)

1733-1760: ⚠️ Potential issue | 🟠 Major

Detached managers can still fall out of shutdown tracking too early.

At Line 1741 and Line 1759, detached managers are removed from trackedAccountManagersForCleanup when they are idle at detach time. In-flight code paths can still enqueue a first debounced save on that detached instance later, which would then be missed by shutdown flushing. Keep detached managers tracked until a true finalization boundary.

🔧 Suggested fix
 const setCachedAccountManager = (manager: AccountManager): AccountManager => {
-	pruneTrackedAccountManagersForCleanup();
 	if (cachedAccountManager && cachedAccountManager !== manager) {
 		activeAccountManagersForCleanup.delete(cachedAccountManager);
-		if (managerHasPendingSave(cachedAccountManager)) {
-			trackedAccountManagersForCleanup.add(cachedAccountManager);
-			scheduleTrackedManagerPrune(cachedAccountManager);
-		} else {
-			trackedAccountManagersForCleanup.delete(cachedAccountManager);
-		}
+		trackedAccountManagersForCleanup.add(cachedAccountManager);
+		if (managerHasPendingSave(cachedAccountManager)) {
+			scheduleTrackedManagerPrune(cachedAccountManager);
+		}
 	}
 	activeAccountManagersForCleanup.add(manager);
 	trackedAccountManagersForCleanup.delete(manager);
 	cachedAccountManager = manager;
 	accountManagerPromise = Promise.resolve(manager);
 	return manager;
 };

 const invalidateAccountManagerCache = (): void => {
-	pruneTrackedAccountManagersForCleanup();
 	if (cachedAccountManager) {
 		activeAccountManagersForCleanup.delete(cachedAccountManager);
-		if (managerHasPendingSave(cachedAccountManager)) {
-			trackedAccountManagersForCleanup.add(cachedAccountManager);
-			scheduleTrackedManagerPrune(cachedAccountManager);
-		} else {
-			trackedAccountManagersForCleanup.delete(cachedAccountManager);
-		}
+		trackedAccountManagersForCleanup.add(cachedAccountManager);
+		if (managerHasPendingSave(cachedAccountManager)) {
+			scheduleTrackedManagerPrune(cachedAccountManager);
+		}
 	}
 	cachedAccountManager = null;
 	accountManagerPromise = null;
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@index.ts` around lines 1733 - 1760, The detach logic in
setCachedAccountManager and invalidateAccountManagerCache prematurely removes
detached managers from trackedAccountManagersForCleanup when they're idle (lines
around managerHasPendingSave branches), which allows in-flight debounced saves
to be missed; update both functions (setCachedAccountManager and
invalidateAccountManagerCache) to never call
trackedAccountManagersForCleanup.delete(cachedAccountManager) for a detached
manager — instead, if managerHasPendingSave(cachedAccountManager) add it and
scheduleTrackedManagerPrune(cachedAccountManager), otherwise still keep it
tracked (do not delete) so it remains part of shutdown flushing until a true
finalization boundary; keep activeAccountManagersForCleanup.delete and other
existing behavior but remove the delete-from-tracked calls and ensure tracked
managers stay present until finalized.
test/index.test.ts (1)

3705-3714: ⚠️ Potential issue | 🟡 Minor

Align legacy-disabled assertion with normalized storage behavior.

Line 3709 sets a legacy disabled account (enabled: false) without disabledReason, but Line 3769 still expects disabledReason to remain undefined. That conflicts with this PR’s normalization rule (enabled: false + missing reason ⇒ "user"), so the test currently validates non-canonical state.

Proposed fix
 		{
 			accountId: "org-legacy",
 			organizationId: "org-legacy",
 			email: "legacy@example.com",
 			refreshToken: "shared-refresh",
 			enabled: false,
+			disabledReason: "user",
 			coolingDownUntil: 18_000,
 			cooldownReason: "auth-failure",
 			addedAt: 12,
 			lastUsed: 12,
 		},
@@
 		expect(
 			revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
-		).toBeUndefined();
+		).toBe("user");

Also applies to: 3768-3770

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` around lines 3705 - 3714, The test sets a legacy disabled
account object (accountId "org-legacy") with enabled: false but no
disabledReason, yet later assertions still expect disabledReason to be
undefined; update the test to match the normalization rule by expecting
disabledReason === "user" for that account (or alternatively add disabledReason:
"user" to the fixture), and apply the same change to the other duplicate
assertion block referenced around the second occurrence (the one around lines
3768-3770) so both the fixture and assertion align with the canonical
normalization behavior.
🧹 Nitpick comments (1)
test/index.test.ts (1)

2121-2121: Prefer typed stub contracts with satisfies over as never for AccountManager mocks.

The as never casts disable type checking for partial mock implementations. Since AccountManager includes flushPendingSave, waitForPendingSaveToSettle, and other public methods not present in these test stubs, new API additions won't cause compilation errors, risking silent interface drift. Create a typed AccountManager stub contract and use satisfies to maintain compile-time verification of mock completeness.

Also applies to: 2284, 2385, 2463, 2910

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` at line 2121, Replace the unsafe "as never" cast on the
mocked AccountManager with a typed stub that uses TypeScript's "satisfies" to
ensure the stub conforms to AccountManager; specifically, create a minimal stub
object (e.g., customManager) typed as a partial but checked with "satisfies
AccountManager" and include the public methods referenced by the real interface
(flushPendingSave, waitForPendingSaveToSettle, etc.), then pass that stub into
vi.spyOn(AccountManager, "loadFromDisk").mockResolvedValueOnce(...) so the
compiler will catch missing or changed public methods instead of using "as
never".
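The satisfies pattern this nitpick suggests keeps the stub's literal type while still compile-checking it against the interface, so a new public method on AccountManager breaks the build instead of drifting silently. AccountManagerLike below is a trimmed stand-in for the real interface, purely for illustration:

```typescript
// Trimmed stand-in for the real AccountManager surface.
interface AccountManagerLike {
	hasPendingSave(): boolean;
	flushPendingSave(): Promise<void>;
	waitForPendingSaveToSettle(): Promise<void>;
}

// `satisfies` verifies the stub against the interface at compile time
// without widening its type, unlike `as never` which disables checking.
const customManager = {
	hasPendingSave: () => false,
	flushPendingSave: async () => {},
	waitForPendingSaveToSettle: async () => {},
} satisfies AccountManagerLike;
```

If AccountManager later gains a required public method, a stub checked with satisfies fails to compile until the test is updated, which is exactly the drift protection the comment is after.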
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/shutdown.test.ts`:
- Around line 438-451: Wrap the process.on spy setup/teardown in a try/finally
to guarantee restoration: when creating the spy (processOnSpy =
vi.spyOn(process, "on")...) and importing registerCleanup/freshRegister, run the
test assertions inside a try block and call processOnSpy.mockRestore() in the
finally block so the spy is always restored even if expectations fail; apply the
same change to the other test block around freshRegister usage (the block
covering the similar assertions at 453-473).

---

Duplicate comments:
In `@index.ts`:
- Around line 1733-1760: The detach logic in setCachedAccountManager and
invalidateAccountManagerCache prematurely removes detached managers from
trackedAccountManagersForCleanup when they're idle (lines around
managerHasPendingSave branches), which allows in-flight debounced saves to be
missed; update both functions (setCachedAccountManager and
invalidateAccountManagerCache) to never call
trackedAccountManagersForCleanup.delete(cachedAccountManager) for a detached
manager — instead, if managerHasPendingSave(cachedAccountManager) add it and
scheduleTrackedManagerPrune(cachedAccountManager), otherwise still keep it
tracked (do not delete) so it remains part of shutdown flushing until a true
finalization boundary; keep activeAccountManagersForCleanup.delete and other
existing behavior but remove the delete-from-tracked calls and ensure tracked
managers stay present until finalized.

In `@lib/accounts.ts`:
- Around line 997-1036: saveToDisk currently writes directly and can race with
the queued save lane because pendingSave is only set via enqueueSave; fix this
by funneling all disk writes through the serialized save path: make saveToDisk
an internal writer (or otherwise stop calling it directly), update every direct
caller to call enqueueSave() instead (so pendingSave is set and the queue
serializes writes), and ensure flushPendingSave awaits the pendingSave promise
before returning; refer to the methods saveToDisk, enqueueSave, pendingSave,
flushPendingSave and the downstream saveAccounts call when making these changes.

In `@test/index.test.ts`:
- Around line 3705-3714: The test sets a legacy disabled account object
(accountId "org-legacy") with enabled: false but no disabledReason, yet later
assertions still expect disabledReason to be undefined; update the test to match
the normalization rule by expecting disabledReason === "user" for that account
(or alternatively add disabledReason: "user" to the fixture), and apply the same
change to the other duplicate assertion block referenced around the second
occurrence (the one around lines 3768-3770) so both the fixture and assertion
align with the canonical normalization behavior.

---

Nitpick comments:
In `@test/index.test.ts`:
- Line 2121: Replace the unsafe "as never" cast on the mocked AccountManager
with a typed stub that uses TypeScript's "satisfies" to ensure the stub conforms
to AccountManager; specifically, create a minimal stub object (e.g.,
customManager) typed as a partial but checked with "satisfies AccountManager"
and include the public methods referenced by the real interface
(flushPendingSave, waitForPendingSaveToSettle, etc.), then pass that stub into
vi.spyOn(AccountManager, "loadFromDisk").mockResolvedValueOnce(...) so the
compiler will catch missing or changed public methods instead of using "as
never".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 265dced9-8d3c-4123-ae30-7a54a773b6bb

📥 Commits

Reviewing files that changed from the base of the PR and between b257c2a and 31a7427.

📒 Files selected for processing (6)
  • index.ts
  • lib/accounts.ts
  • lib/shutdown.ts
  • test/accounts.test.ts
  • test/index.test.ts
  • test/shutdown.test.ts

Comment thread test/shutdown.test.ts
Comment on lines 438 to +451
it("signal handlers are only registered once", async () => {
const processOnceSpy = vi.spyOn(process, "once").mockImplementation(() => process);
const processOnSpy = vi.spyOn(process, "on").mockImplementation(() => process);

vi.resetModules();
const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");

freshRegister(() => {});
const firstCallCount = processOnceSpy.mock.calls.length;
const firstCallCount = processOnSpy.mock.calls.length;

freshRegister(() => {});
expect(processOnSpy.mock.calls.length).toBe(firstCallCount);

processOnSpy.mockRestore();
});

⚠️ Potential issue | 🟡 Minor

Use try/finally for spy restoration in these tests.

If an assertion throws before the trailing mockRestore() calls, process spies leak into later tests and can create misleading failures.

💡 Suggested hardening
 it("signal handlers are only registered once", async () => {
   const processOnSpy = vi.spyOn(process, "on").mockImplementation(() => process);
-
-  vi.resetModules();
-  const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");
-
-  freshRegister(() => {});
-  const firstCallCount = processOnSpy.mock.calls.length;
-
-  freshRegister(() => {});
-  expect(processOnSpy.mock.calls.length).toBe(firstCallCount);
-
-  processOnSpy.mockRestore();
+  try {
+    vi.resetModules();
+    const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");
+    freshRegister(() => {});
+    const firstCallCount = processOnSpy.mock.calls.length;
+    freshRegister(() => {});
+    expect(processOnSpy.mock.calls.length).toBe(firstCallCount);
+  } finally {
+    processOnSpy.mockRestore();
+  }
 });

Also applies to: 453-473

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/shutdown.test.ts` around lines 438 - 451, Wrap the process.on spy
setup/teardown in a try/finally to guarantee restoration: when creating the spy
(processOnSpy = vi.spyOn(process, "on")...) and importing
registerCleanup/freshRegister, run the test assertions inside a try block and
call processOnSpy.mockRestore() in the finally block so the spy is always
restored even if expectations fail; apply the same change to the other test
block around freshRegister usage (the block covering the similar assertions at
453-473).


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/accounts.ts (1)

1117-1131: ⚠️ Potential issue | 🟠 Major

Retain retry intent after debounced save failures

At Line 1125 and Line 1127, debounced save failures are only logged. Once pendingSave settles, hasPendingSave() (Line 1066) becomes false, so a later flushPendingSave() (Line 1134) can return without retrying unsaved state. This can drop critical disable-state persistence after transient EBUSY/lock failures.

💡 Proposed fix (track dirty-save retry state)
 class AccountManager {
+  private saveRetryNeeded = false;
   private saveDebounceTimer: ReturnType<typeof setTimeout> | null = null;
   private pendingSave: Promise<void> | null = null;
@@
   hasPendingSave(): boolean {
-    return this.saveDebounceTimer !== null || this.pendingSave !== null;
+    return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
   }
@@
   saveToDiskDebounced(delayMs = 500): void {
+    this.saveRetryNeeded = true;
     if (this.saveDebounceTimer) {
       clearTimeout(this.saveDebounceTimer);
     }
     this.saveDebounceTimer = setTimeout(() => {
       this.saveDebounceTimer = null;
       const doSave = async () => {
         try {
           await this.saveToDisk();
+          this.saveRetryNeeded = false;
         } catch (error) {
+          this.saveRetryNeeded = true;
           log.warn("Debounced save failed", { error: error instanceof Error ? error.message : String(error) });
         }
       };
       void doSave();
     }, delayMs);
   }
@@
   async flushPendingSave(options?: FlushPendingSaveOptions): Promise<void> {
@@
-      const hadDebouncedSave = !!this.saveDebounceTimer;
+      const hadDebouncedSave = this.saveRetryNeeded || !!this.saveDebounceTimer;
@@
       const flushSave = this.saveToDisk();
@@
       if (flushSaveError) {
         throw flushSaveError;
       }
+      this.saveRetryNeeded = false;

Also applies to: 1066-1068, 1134-1221

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/accounts.ts` around lines 1117 - 1131, The debounced save currently logs
failures in saveToDiskDebounced (inside saveDebounceTimer) but doesn't preserve
retry intent, so hasPendingSave() becomes false and flushPendingSave() may skip
retrying; to fix, introduce a retry-tracking flag (e.g.,
this.pendingSaveNeedsRetry or this.dirtySave) and update saveToDiskDebounced: on
save failure set this.pendingSaveNeedsRetry = true (and do not clear retry
intent), and on successful save clear it; update hasPendingSave() to return true
if this.pendingSave or this.pendingSaveNeedsRetry is set, and update
flushPendingSave() to attempt a retry when pendingSaveNeedsRetry is true (and
clear the flag only after a successful save). Ensure symbols mentioned
(saveToDiskDebounced, saveDebounceTimer, saveToDisk, pendingSave,
hasPendingSave, flushPendingSave) are updated consistently.
♻️ Duplicate comments (1)
test/index.test.ts (1)

3769-3770: ⚠️ Potential issue | 🟡 Minor

Legacy-disabled assertion still conflicts with normalized storage behavior.

This still expects disabledReason to be undefined for org-legacy, but this PR’s normalization contract treats legacy enabled: false entries without a reason as "user".

Suggested assertion fix
- expect(
-   revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
- ).toBeUndefined();
+ expect(
+   revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
+ ).toBe("user");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` around lines 3769 - 3770, The test incorrectly asserts
that revivedEntries.find((account) => account.accountId ===
"org-legacy")?.disabledReason is undefined despite the normalization logic
treating legacy entries with enabled: false and no reason as "user"; update the
assertion to expect "user" (or the normalized constant used in the codebase) for
the disabledReason of the account with accountId "org-legacy" so it matches the
normalization contract applied in the code that produces revivedEntries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@index.ts`:
- Around line 3867-3870: After calling
persistAccountPool(selection.variantsForPersistence, ...) you must invalidate or
refresh the in-memory AccountManager cache so the runtime sees the persisted
enable/disable changes immediately; locate the usage of persistAccountPool and
add a call to the AccountManager cache refresh method (e.g.
AccountManager.invalidateCache(), AccountManager.reloadFromDisk(), or
accountManager.refreshAccounts()) immediately after the await succeeds (and not
only in other auth paths), ensuring the same error handling behavior is
preserved so stale in-memory account state is not retained.

In `@test/index.test.ts`:
- Around line 361-374: The mock's disableAccountsWithSameRefreshToken does not
reset the per-refresh-token auth-failure counter like production does; update
disableAccountsWithSameRefreshToken to also clear the failure entry for the
given refreshToken (mirror lib/accounts.ts behavior) — e.g., after determining
refreshToken and before returning, remove or reset the auth-failure tracking
entry (the same map/key used by the production code) so that sibling accounts
have the same reset parity as production.

---

Outside diff comments:
In `@lib/accounts.ts`:
- Around line 1117-1131: The debounced save currently logs failures in
saveToDiskDebounced (inside saveDebounceTimer) but doesn't preserve retry
intent, so hasPendingSave() becomes false and flushPendingSave() may skip
retrying; to fix, introduce a retry-tracking flag (e.g.,
this.pendingSaveNeedsRetry or this.dirtySave) and update saveToDiskDebounced: on
save failure set this.pendingSaveNeedsRetry = true (and do not clear retry
intent), and on successful save clear it; update hasPendingSave() to return true
if this.pendingSave or this.pendingSaveNeedsRetry is set, and update
flushPendingSave() to attempt a retry when pendingSaveNeedsRetry is true (and
clear the flag only after a successful save). Ensure symbols mentioned
(saveToDiskDebounced, saveDebounceTimer, saveToDisk, pendingSave,
hasPendingSave, flushPendingSave) are updated consistently.

---

Duplicate comments:
In `@test/index.test.ts`:
- Around line 3769-3770: The test incorrectly asserts that
revivedEntries.find((account) => account.accountId ===
"org-legacy")?.disabledReason is undefined despite the normalization logic
treating legacy entries with enabled: false and no reason as "user"; update the
assertion to expect "user" (or the normalized constant used in the codebase) for
the disabledReason of the account with accountId "org-legacy" so it matches the
normalization contract applied in the code that produces revivedEntries.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 247abe40-2571-417b-a590-a18aded5f2d7

📥 Commits

Reviewing files that changed from the base of the PR and between 31a7427 and feb5f29.

📒 Files selected for processing (4)
  • index.ts
  • lib/accounts.ts
  • test/accounts.test.ts
  • test/index.test.ts

Comment thread index.ts
Comment thread test/index.test.ts
Comment on lines +361 to +374
disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
  const refreshToken = account.refreshToken;
  let disabledCount = 0;
  for (const storedAccount of this.accounts) {
    if (storedAccount.refreshToken !== refreshToken) continue;
    if (storedAccount.enabled === false) continue;
    delete storedAccount.coolingDownUntil;
    delete storedAccount.cooldownReason;
    storedAccount.enabled = false;
    storedAccount.disabledReason = "auth-failure";
    disabledCount++;
  }
  return disabledCount;
}

⚠️ Potential issue | 🟡 Minor

Mock disable flow is still missing auth-failure counter reset parity.

disableAccountsWithSameRefreshToken in this mock disables siblings, but it does not mirror production’s per-refresh-token auth-failure reset (lib/accounts.ts clears the token’s failure entry). That gap can hide regressions around re-enable/retry flows after disable.

Suggested parity update
 class MockAccountManager {
+  private authFailuresByRefreshToken = new Map<string, number>();
+
-  clearAuthFailures() {}
-  incrementAuthFailures() { return 1; }
+  clearAuthFailures(account: { refreshToken?: string }) {
+    if (!account.refreshToken) return;
+    this.authFailuresByRefreshToken.delete(account.refreshToken);
+  }
+  incrementAuthFailures(account: { refreshToken?: string }) {
+    const refreshToken = account.refreshToken ?? "";
+    const current = this.authFailuresByRefreshToken.get(refreshToken) ?? 0;
+    const next = current + 1;
+    this.authFailuresByRefreshToken.set(refreshToken, next);
+    return next;
+  }

   disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
     const refreshToken = account.refreshToken;
     let disabledCount = 0;
     for (const storedAccount of this.accounts) {
       if (storedAccount.refreshToken !== refreshToken) continue;
       if (storedAccount.enabled === false) continue;
       delete storedAccount.coolingDownUntil;
       delete storedAccount.cooldownReason;
       storedAccount.enabled = false;
       storedAccount.disabledReason = "auth-failure";
       disabledCount++;
     }
+    if (refreshToken) {
+      this.authFailuresByRefreshToken.delete(refreshToken);
+    }
     return disabledCount;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` around lines 361 - 374, The mock's
disableAccountsWithSameRefreshToken does not reset the per-refresh-token
auth-failure counter like production does; update
disableAccountsWithSameRefreshToken to also clear the failure entry for the
given refreshToken (mirror lib/accounts.ts behavior) — e.g., after determining
refreshToken and before returning, remove or reset the auth-failure tracking
entry (the same map/key used by the production code) so that sibling accounts
have the same reset parity as production.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
test/index.test.ts (1)

361-374: ⚠️ Potential issue | 🟡 Minor

Keep grouped-disable mock in parity with production auth-failure reset.

disableAccountsWithSameRefreshToken in the test mock disables accounts but does not clear per-refresh-token auth-failure tracking, unlike production. That can mask regressions in post-disable recovery paths.

🧪 Suggested parity patch
 class MockAccountManager {
+	private authFailuresByRefreshToken = new Map<string, number>();
@@
-		clearAuthFailures() {}
-		incrementAuthFailures() { return 1; }
+		clearAuthFailures(account: { refreshToken?: string }) {
+			if (!account.refreshToken) return;
+			this.authFailuresByRefreshToken.delete(account.refreshToken);
+		}
+		incrementAuthFailures(account: { refreshToken?: string }) {
+			const refreshToken = account.refreshToken ?? "";
+			const current = this.authFailuresByRefreshToken.get(refreshToken) ?? 0;
+			const next = current + 1;
+			this.authFailuresByRefreshToken.set(refreshToken, next);
+			return next;
+		}
@@
 		disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
 			const refreshToken = account.refreshToken;
 			let disabledCount = 0;
@@
 			}
+			if (refreshToken) {
+				this.authFailuresByRefreshToken.delete(refreshToken);
+			}
 			return disabledCount;
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/index.test.ts` around lines 361 - 374, The test mock function
disableAccountsWithSameRefreshToken currently disables matching accounts but
does not clear the per-refresh-token auth-failure tracking, causing divergence
from production; update disableAccountsWithSameRefreshToken to also reset/clear
the auth-failure state for the given refreshToken (e.g., remove
this.failedAuthByRefreshToken[refreshToken] or call
this.authFailureTracker.reset(refreshToken)) in addition to deleting
coolingDownUntil/cooldownReason and setting enabled/disabledReason so the mock
mirrors production recovery behavior.
🧹 Nitpick comments (2)
test/accounts.test.ts (1)

1844-1916: Add one Windows-style transient lock fixture.

These new retry cases only simulate EBUSY. If the save recovery code also treats Windows lock failures like EPERM or EACCES as transient, that branch is still uncovered and could regress unnoticed.

Also applies to: 1918-2017, 2533-2601, 2780-2855, 2857-2920, 3001-3031

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/accounts.test.ts` around lines 1844 - 1916, The test only simulates a
Unix-style transient lock (EBUSY); add a Windows-style transient lock fixture by
inserting another mockSaveAccounts.mockImplementationOnce that rejects with a
Windows error (e.g. Object.assign(new Error("EPERM: file in use"), { code:
"EPERM" }) or { code: "EACCES" }) before the final successful
mockImplementationOnce, and update expectations (mockSaveAccounts call count and
which savedSnapshots index is asserted) so the sequence is: first pending reject
via rejectFirstSave, second reject with EPERM/EACCES, third a successful save;
refer to mockSaveAccounts.mockImplementationOnce, rejectFirstSave,
savedSnapshots, and AccountManager.saveToDiskDebounced to locate the change.
index.ts (1)

1818-1821: Extract repeated “reload manager from disk” logic into one helper.

The same if (cachedAccountManager) { loadFromDisk(); setCachedAccountManager(...) } block is duplicated in many places. A single helper reduces drift and keeps future cache-sync changes safe.

♻️ Refactor sketch
+		const reloadCachedAccountManagerFromDisk = async (): Promise<void> => {
+			if (!cachedAccountManager) return;
+			const reloadedManager = await AccountManager.loadFromDisk();
+			setCachedAccountManager(reloadedManager);
+		};

-		if (cachedAccountManager) {
-			const reloadedManager = await AccountManager.loadFromDisk();
-			setCachedAccountManager(reloadedManager);
-		}
+		await reloadCachedAccountManagerFromDisk();

Also applies to: 4159-4162, 5216-5219, 5460-5463, 5560-5563, 5637-5640, 5953-5956, 6047-6050

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@index.ts` around lines 1818 - 1821, Extract the repeated "reload manager from
disk" logic into a single helper: create a function (e.g.
reloadCachedAccountManagerIfPresent or ensureCachedAccountManagerReloaded) that
checks cachedAccountManager, calls AccountManager.loadFromDisk(), and invokes
setCachedAccountManager(reloadedManager); then replace every duplicated block
(instances referencing cachedAccountManager, AccountManager.loadFromDisk,
setCachedAccountManager) with a call to that helper so all reload behavior is
centralized and future cache-sync changes only need to touch the helper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lib/accounts.ts`:
- Around line 1073-1075: hasPendingSave() currently returns true only if
saveDebounceTimer or pendingSave are non-null and ignores the saveRetryNeeded
flag, so after a failed save it can wrongly report no pending work; update the
hasPendingSave() method to also check this.saveRetryNeeded (e.g., return
this.saveDebounceTimer !== null || this.pendingSave !== null ||
this.saveRetryNeeded === true) so flushPendingSave()/manager-settled/prune logic
will treat retry-needed state as pending work.

In `@test/accounts.test.ts`:
- Around line 951-969: The test currently sets stored.version = 3 so it
exercises current-version normalization only; change the fixture to use a legacy
version (set stored.version to 1 or 2) and ensure the account object lacks a
disabledReason so the migration path runs; update the assertions to verify that
the migration logic produces the expected disabledReason and that saveAccounts
(mocked via saveAccounts) is invoked with the migrated data. Target the stored
variable in this test and keep the mocked saveAccounts behavior
(mockResolvedValueOnce) while asserting the migration result.
- Around line 39-41: The "flushes pending debounced save" test is missing a
mockClear, causing stale calls to mockSaveAccounts to leak between tests; update
setup to clear the saveAccounts mock (mockSaveAccounts.mockClear()) before tests
run (either add mockSaveAccounts.mockClear() to the existing beforeEach that
currently clears accountLoggerWarn, or call mockSaveAccounts.mockClear() at the
start of the "flushes pending debounced save" test before mockResolvedValue())
so that expect(mockSaveAccounts).toHaveBeenCalled() only asserts on calls made
by the current test.

---

Duplicate comments:
In `@test/index.test.ts`:
- Around line 361-374: The test mock function
disableAccountsWithSameRefreshToken currently disables matching accounts but
does not clear the per-refresh-token auth-failure tracking, causing divergence
from production; update disableAccountsWithSameRefreshToken to also reset/clear
the auth-failure state for the given refreshToken (e.g., remove
this.failedAuthByRefreshToken[refreshToken] or call
this.authFailureTracker.reset(refreshToken)) in addition to deleting
coolingDownUntil/cooldownReason and setting enabled/disabledReason so the mock
mirrors production recovery behavior.

---

Nitpick comments:
In `@index.ts`:
- Around line 1818-1821: Extract the repeated "reload manager from disk" logic
into a single helper: create a function (e.g.
reloadCachedAccountManagerIfPresent or ensureCachedAccountManagerReloaded) that
checks cachedAccountManager, calls AccountManager.loadFromDisk(), and invokes
setCachedAccountManager(reloadedManager); then replace every duplicated block
(instances referencing cachedAccountManager, AccountManager.loadFromDisk,
setCachedAccountManager) with a call to that helper so all reload behavior is
centralized and future cache-sync changes only need to touch the helper.

In `@test/accounts.test.ts`:
- Around line 1844-1916: The test only simulates a Unix-style transient lock
(EBUSY); add a Windows-style transient lock fixture by inserting another
mockSaveAccounts.mockImplementationOnce that rejects with a Windows error (e.g.
Object.assign(new Error("EPERM: file in use"), { code: "EPERM" }) or { code:
"EACCES" }) before the final successful mockImplementationOnce, and update
expectations (mockSaveAccounts call count and which savedSnapshots index is
asserted) so the sequence is: first pending reject via rejectFirstSave, second
reject with EPERM/EACCES, third a successful save; refer to
mockSaveAccounts.mockImplementationOnce, rejectFirstSave, savedSnapshots, and
AccountManager.saveToDiskDebounced to locate the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f72d1c86-7c7d-48cd-86cb-d99cd6a2b947

📥 Commits

Reviewing files that changed from the base of the PR and between feb5f29 and b979b68.

📒 Files selected for processing (4)
  • index.ts
  • lib/accounts.ts
  • test/accounts.test.ts
  • test/index.test.ts

Comment thread lib/accounts.ts
Comment on lines +1073 to +1075
hasPendingSave(): boolean {
  return this.saveDebounceTimer !== null || this.pendingSave !== null;
}

⚠️ Potential issue | 🟠 Major

Track retry-needed state as pending work.

hasPendingSave() ignores saveRetryNeeded. After a failed save, Line 1074 can return false even though flushPendingSave() still has retry work, which can let manager-settled/prune logic drop an instance before retry/flush.

💡 Suggested fix
 hasPendingSave(): boolean {
-	return this.saveDebounceTimer !== null || this.pendingSave !== null;
+	return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-hasPendingSave(): boolean {
-  return this.saveDebounceTimer !== null || this.pendingSave !== null;
-}
+hasPendingSave(): boolean {
+  return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/accounts.ts` around lines 1073 - 1075, hasPendingSave() currently returns
true only if saveDebounceTimer or pendingSave are non-null and ignores the
saveRetryNeeded flag, so after a failed save it can wrongly report no pending
work; update the hasPendingSave() method to also check this.saveRetryNeeded
(e.g., return this.saveDebounceTimer !== null || this.pendingSave !== null ||
this.saveRetryNeeded === true) so flushPendingSave()/manager-settled/prune logic
will treat retry-needed state as pending work.

Comment thread test/accounts.test.ts
Comment on lines +39 to +41
beforeEach(() => {
  accountLoggerWarn.mockClear();
});

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the file structure and locate test/accounts.test.ts
fd -t f "accounts.test.ts" | head -20

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 94


🏁 Script executed:

# Check the beginning of the test file to see the imports and mock setup
head -60 test/accounts.test.ts

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 1985


🏁 Script executed:

# Look at lines 39-41 specifically and the surrounding context
sed -n '30,50p' test/accounts.test.ts

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 656


🏁 Script executed:

# Get more context around the saveAccounts mock - let's see how it's defined
rg "saveAccounts" test/accounts.test.ts -B 2 -A 2 | head -50

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 1906


🏁 Script executed:

# Now check around line 2317 where the flush assertion is mentioned
sed -n '2310,2325p' test/accounts.test.ts

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 585


🏁 Script executed:

# Let's look at the full "flushes pending debounced save" test
sed -n '2310,2370p' test/accounts.test.ts

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 2032


🏁 Script executed:

# Let's also check if there's any test that uses saveAccounts without calling mockClear
rg "saveAccounts\)" test/accounts.test.ts | grep -v "mockClear" | head -20

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 1208


🏁 Script executed:

# Let's check what comes before and after line 2317 more carefully to find the flush test
sed -n '2300,2350p' test/accounts.test.ts

Repository: ndycode/oc-chatgpt-multi-auth

Length of output: 1642


Add mockClear() to the "flushes pending debounced save" test.

The "flushes pending debounced save" test (line ~2317) does not clear saveAccounts before asserting on it. Unlike all other tests in the suite, it calls mockResolvedValue() without first calling mockClear(). Since beforeEach only clears accountLoggerWarn, the assertion expect(mockSaveAccounts).toHaveBeenCalled() can pass on stale calls from previous tests instead of only the current invocation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/accounts.test.ts` around lines 39 - 41, The "flushes pending debounced
save" test is missing a mockClear, causing stale calls to mockSaveAccounts to
leak between tests; update setup to clear the saveAccounts mock
(mockSaveAccounts.mockClear()) before tests run (either add
mockSaveAccounts.mockClear() to the existing beforeEach that currently clears
accountLoggerWarn, or call mockSaveAccounts.mockClear() at the start of the
"flushes pending debounced save" test before mockResolvedValue()) so that
expect(mockSaveAccounts).toHaveBeenCalled() only asserts on calls made by the
current test.

Comment thread test/accounts.test.ts
Comment on lines +951 to +969
it("defaults legacy disabled accounts to a user disable reason", async () => {
const { saveAccounts } = await import("../lib/storage.js");
const mockSaveAccounts = vi.mocked(saveAccounts);
mockSaveAccounts.mockClear();
mockSaveAccounts.mockResolvedValueOnce();

const now = Date.now();
const stored = {
version: 3 as const,
activeIndex: 0,
accounts: [
{
refreshToken: "token-1",
enabled: false,
addedAt: now,
lastUsed: now,
},
],
};

⚠️ Potential issue | 🟡 Minor

This "legacy" case never hits the migration path.

With version: 3, this test only verifies current-version normalization. It won't catch regressions in the V1/V2 → V3 defaulting behavior this PR is adding for disabledReason.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/accounts.test.ts` around lines 951 - 969, The test currently sets
stored.version = 3 so it exercises current-version normalization only; change
the fixture to use a legacy version (set stored.version to 1 or 2) and ensure
the account object lacks a disabledReason so the migration path runs; update the
assertions to verify that the migration logic produces the expected
disabledReason and that saveAccounts (mocked via saveAccounts) is invoked with
the migrated data. Target the stored variable in this test and keep the mocked
saveAccounts behavior (mockResolvedValueOnce) while asserting the migration
result.

@ndycode ndycode closed this Apr 6, 2026
ndycode added a commit that referenced this pull request Apr 6, 2026


8 participants